AERoS: Assurance of Emergent Behaviour in Autonomous Robotic Swarms
The behaviours of a swarm are not explicitly engineered. Instead, they are an
emergent consequence of the interactions of individual agents with each other
and their environment. This emergent functionality poses a challenge to safety
assurance. The main contribution of this paper is AERoS, a process for the
safety assurance of emergent behaviour in autonomous robotic swarms,
following the guidance on the Assurance of Machine Learning for use in
Autonomous Systems (AMLAS). We explore our proposed process using a case study
centred on a robot swarm operating a public cloakroom.
Comment: 12 pages, 11 figures
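A toy illustration (not taken from the paper, and with purely illustrative parameters) of the kind of emergence the abstract describes: each agent below follows a single local rule, drift toward the average position of its nearby neighbours, yet the swarm settles into clusters that no individual rule specifies.

```python
def step(positions, radius=15.0, rate=0.1):
    """Advance every agent one tick using only local neighbour information."""
    new_positions = []
    for x in positions:
        # An agent only sees others within `radius`; there is no global view.
        neighbours = [y for y in positions if abs(y - x) <= radius]
        centre = sum(neighbours) / len(neighbours)  # includes the agent itself
        new_positions.append(x + rate * (centre - x))
    return new_positions

# Two loose groups of agents on a line; no rule mentions "clusters".
swarm = [2.0, 5.0, 9.0, 60.0, 64.0, 67.0]
for _ in range(300):
    swarm = step(swarm)

print([round(x, 2) for x in swarm])
# -> [5.33, 5.33, 5.33, 63.67, 63.67, 63.67]: each group collapsed to a point
```

The clustering is exactly the assurance problem AERoS targets: it is a property of the interactions, not of any single agent's code, so it cannot be verified by inspecting one agent in isolation.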
Soft Gripping: Specifying for Trustworthiness
Soft robotics is an emerging technology in which engineers create flexible
devices for use in a variety of applications. In order to advance the wide
adoption of soft robots, ensuring their trustworthiness is essential; if soft
robots are not trusted, they will not be used to their full potential. To
demonstrate trustworthiness, a specification must be formulated that defines
what is trustworthy. However, even for soft robotic grippers, one of the most
mature areas in soft robotics, the community has so far given very little
attention to formulating specifications. In this work,
we discuss the importance of developing specifications during development of
soft robotic systems, and present an extensive example specification for a soft
gripper for pick-and-place tasks for grocery items. The proposed specification
covers both functional and non-functional requirements, such as reliability,
safety, adaptability, predictability, ethics, and regulations. We also
highlight the need to promote verifiability as a first-class objective in the
design of a soft gripper.
Comment: Updated the Standards subsection of the paper. 9 pages, 2 figures, 1
table, 34 references
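One way to make such a specification verifiable, sketched below with hypothetical requirement identifiers, thresholds, and field names (none are from the paper), is to encode each requirement as a machine-checkable predicate over observed gripper behaviour:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Requirement:
    ident: str                         # e.g. "F1", "R1", "S1" (illustrative)
    category: str                      # "functional", "reliability", "safety", ...
    description: str
    check: Callable[[dict], bool]      # predicate over an observation record

def verify(requirements, observation):
    """Return the identifiers of all requirements the observation violates."""
    return [r.ident for r in requirements if not r.check(observation)]

# Illustrative fragment of a pick-and-place gripper specification.
spec = [
    Requirement("F1", "functional", "grips grocery items up to 500 g",
                lambda o: o["max_payload_g"] >= 500),
    Requirement("R1", "reliability", "at least 99% successful picks",
                lambda o: o["pick_success_rate"] >= 0.99),
    Requirement("S1", "safety", "peak contact force stays below 20 N",
                lambda o: o["peak_contact_force_n"] < 20),
]

trial = {"max_payload_g": 650, "pick_success_rate": 0.97,
         "peak_contact_force_n": 12.5}
print(verify(spec, trial))  # -> ['R1']: the reliability requirement failed
```

Expressing requirements this way treats verifiability as a first-class objective: every clause of the specification comes with an executable acceptance check.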
On Specifying for Trustworthiness
As autonomous systems (AS) increasingly become part of our daily lives,
ensuring their trustworthiness is crucial. In order to demonstrate the
trustworthiness of an AS, we first need to specify what is required for an AS
to be considered trustworthy. This roadmap paper identifies key challenges for
specifying for trustworthiness in AS, as identified during the "Specifying for
Trustworthiness" workshop held as part of the UK Research and Innovation (UKRI)
Trustworthy Autonomous Systems (TAS) programme. We look across a range of AS
domains with consideration of the resilience, trust, functionality,
verifiability, security, and governance and regulation of AS and identify some
of the key specification challenges in these domains. We then highlight the
intellectual challenges that are involved with specifying for trustworthiness
in AS that cut across domains and are exacerbated by the inherent uncertainty
of the environments in which AS must operate.
Comment: Accepted version of the paper. 13 pages, 1 table, 1 figure
Model Checking Goal-Oriented Requirements for Self-Adaptive Systems
To deal with the increasing complexity and uncertainty ...
An Integrated Eclipse Plug-In for Engineering and Implementing Self-Adaptive Systems
A highly decentralized system of autonomous service components consists of multiple and possibly interacting feedback loops. These loops can be organized into a variety of architectural patterns. Although several authors have addressed the need to make feedback loops first-class entities, little attention has been given to providing solid tool support for their engineering and implementation. In this paper, we present SimSOTA, an integrated Eclipse plug-in tool to architect, engineer and implement self-adaptive systems based on our feedback-loop-based approach. SimSOTA adopts model-driven development to model and simulate complex self-adaptive architectural patterns, and to automate the generation of Java-based implementation code for the patterns. The approach is validated using a case study in cooperative electric vehicles.
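To make the "feedback loops as first-class entities" idea concrete, here is a minimal sketch of one such loop in the classic monitor-analyse-plan-execute style. It is not SimSOTA's actual API or generated code; all names and the proportional-correction policy are assumptions for illustration.

```python
class FeedbackLoop:
    """A single self-adaptation loop over one managed property."""

    def __init__(self, target, gain=0.5):
        self.target = target   # desired value of the managed property
        self.gain = gain       # how aggressively to correct deviations

    def monitor(self, system):
        return system["value"]             # sense the current state

    def analyse(self, observed):
        return self.target - observed      # deviation from the goal

    def plan(self, deviation):
        return self.gain * deviation       # proportional correction

    def execute(self, system, correction):
        system["value"] += correction      # actuate the change

    def run(self, system, ticks):
        for _ in range(ticks):
            observed = self.monitor(system)
            self.execute(system, self.plan(self.analyse(observed)))

system = {"value": 0.0}
FeedbackLoop(target=10.0).run(system, ticks=20)
print(round(system["value"], 3))  # the loop has driven the value to ~10.0
```

Making the loop an explicit object, rather than logic scattered through the components, is what allows multiple loops to be composed into the architectural patterns the paper models and simulates.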